
Conversation

@sanderdemeyer
Contributor

Add an extension using GenericLinearAlgebra and GenericSchur to be able to deal with BigFloats. Some comments are in order.

  • Currently, only the QR, LQ, SVD, eig, and eigh decompositions are defined; the rest are not. qr_null and lq_null are also not implemented.
  • Both qr_full and qr_compact are defined. In this implementation, the former is based on the latter: the compact Q is extended to a full unitary via a Gram-Schmidt procedure (see the sketch after this list). Hence, the Q resulting from qr_full is not unique, and in the tests I only cover the columns of Q arising from qr_compact. The same holds for the LQ decomposition.
  • In the QR and LQ decompositions, only pivoted = false and blocksize = 1 are supported. The other options throw an explicit error.
  • In some tests, I had to change === to ≈, since they would otherwise fail. I don't know whether this has an easy solution, or whether this is something that cannot be enforced in the generic case.
  • GenericLinearAlgebra.jl has all required functions, apart from the eigendecomposition for non-hermitian matrices (although eigvals is supported for non-hermitian matrices). Hence, I added GenericSchur.jl to cover this.
  • One of the main reasons I want to have support for BigFloats in MatrixAlgebraKit is to get the exponential of a matrix of BigFloats. That functionality is currently not defined here but in TensorKit.jl, where the only thing called is LinearAlgebra.exp!. Since it might be cleaner to only define an extension for MatrixAlgebraKit and not for TensorKit, it might be beneficial to move the definition of exp! to MatrixAlgebraKit. If there are alternatives or suggestions, please let me know.
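For concreteness, a minimal sketch of such a Gram-Schmidt completion (illustrative only: `complete_unitary` is not the PR's actual function, and the tolerance is arbitrary):

```julia
using LinearAlgebra

# Complete an m×k matrix Q with orthonormal columns to a full m×m unitary by
# orthogonalizing standard basis vectors against the columns found so far.
function complete_unitary(Q::AbstractMatrix{T}) where {T}
    m, k = size(Q)
    Qfull = Matrix{T}(undef, m, m)
    Qfull[:, 1:k] .= Q
    j = k
    for e in eachcol(Matrix{T}(I, m, m))
        j == m && break
        v = e - Qfull[:, 1:j] * (Qfull[:, 1:j]' * e)   # project out existing columns
        nv = norm(v)
        nv < sqrt(eps(real(T))) && continue            # candidate already in the span
        j += 1
        Qfull[:, j] = v / nv
    end
    return Qfull
end
```

Any ordering of the candidate basis vectors yields a valid completion, which is exactly why the extra columns of Q from qr_full are not unique.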

Add extension using GenericLinearAlgebra and GenericSchur to be able to deal with `BigFloat`s
@codecov

codecov bot commented Oct 29, 2025

Codecov Report

❌ Patch coverage is 95.77465% with 3 lines in your changes missing coverage. Please review.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| ext/MatrixAlgebraKitGenericLinearAlgebraExt.jl | 95.23% | 3 Missing ⚠️ |

| Files with missing lines | Coverage Δ |
| --- | --- |
| ext/MatrixAlgebraKitGenericSchurExt.jl | 100.00% <100.00%> (ø) |
| src/MatrixAlgebraKit.jl | 100.00% <ø> (ø) |
| src/interface/decompositions.jl | 52.63% <ø> (ø) |
| ext/MatrixAlgebraKitGenericLinearAlgebraExt.jl | 95.23% <95.23%> (ø) |

Member

@lkdvos lkdvos left a comment

Looks really cool, very nice work!

I have a couple of questions, mostly because I am less familiar with these packages:

Do these packages have to come together, or are they really separate entities? From a quick glance it seems like GenericSchur is being used for eig, while GenericLinearAlgebra is used for the rest. I was under the impression that GenericLinearAlgebra also had a non-symmetric eigenvalue implementation but that might be wrong.
Anyways, the reason I'm asking is because I was wondering if it would make sense to simply have one extension for each, rather than requiring both.

It looks slightly odd to me to call these algorithms BigFloat_*, since IIRC these should work for generic element types, right? How would you feel about GenericLinearAlgebra_HouseholderQR, or simply GLA_HouseholderQR etc.? Similarly, I think there are some type restrictions on the functions that might not be required.

Considering the allocation of the output, do you know if we can pass the output to the implementations of these libraries? If not, that's definitely also okay, but in that case we should probably intercept allocating them in the first place, if they will be ignored. This can be achieved by overloading initialize_output(...) -> Nothing and altering the checks accordingly.
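For illustration, intercepting the allocation might look roughly like this (hypothetical sketch; the exact initialize_output signature and the GLA_HouseholderQR name suggested above are assumptions, not MatrixAlgebraKit's verified API):

```julia
# Hypothetical sketch: have initialize_output return nothing so no output is
# allocated up front, since the wrapped library allocates internally anyway.
import MatrixAlgebraKit: initialize_output, qr_compact!

initialize_output(::typeof(qr_compact!), A::AbstractMatrix, ::GLA_HouseholderQR) = nothing
```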


For the tests, should we try and invest in reducing the code duplication a bit for the different algorithms? Would it make sense to simply add the eltypes to the already existing tests? This also relates to the GPU tests, @kshyatt might be interested in this as well.


I left some other comments throughout the code, but overall this looks like an amazing addition.

@lkdvos lkdvos linked an issue Oct 29, 2025 that may be closed by this pull request
@kshyatt
Member

kshyatt commented Oct 30, 2025

Re the test duplication, I think it could well make sense to do something similar to what GPUArrays does with its test suite: define compatible array and element type pairs and run the generically written test suite on them. But that shouldn't hold this PR back; should we maybe open an issue for it?
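As a rough sketch of that pattern (the eltype list is illustrative, and `qr_compact` stands in for whichever MatrixAlgebraKit routines are under test):

```julia
using Test, LinearAlgebra
using MatrixAlgebraKit: qr_compact

# One generically written test body, looped over registered eltypes,
# in the spirit of the GPUArrays test suite. Tolerances are the ≈ defaults.
for T in (Float64, ComplexF64, BigFloat, Complex{BigFloat})
    @testset "qr_compact, eltype $T" begin
        A = T <: Complex ? T.(complex.(randn(4, 3), randn(4, 3))) : T.(randn(4, 3))
        Q, R = qr_compact(A)
        @test Q * R ≈ A
        @test Q' * Q ≈ I
    end
end
```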

@kshyatt
Member

kshyatt commented Oct 30, 2025

WRT the tests, if qr and svd are supported, can the higher-level factorizations that use them (orth/null and buddies) also be tested with BigFloat?

@kshyatt
Member

kshyatt commented Oct 30, 2025

In some tests, I had to change === to ≈

Does this mean that copies are happening where they shouldn't? What happens if you use == rather than ===?

@sanderdemeyer
Contributor Author

sanderdemeyer commented Oct 30, 2025

I was under the impression that GenericLinearAlgebra also had a non-symmetric eigenvalue implementation but that might be wrong.

GenericLinearAlgebra supports the non-symmetric eigenvalue decomposition only when BigFloats aren't being used. For non-symmetric BigFloat matrices it throws an explicit error. In that regard, I would be fine with splitting this into two extensions.

Does this mean that copies are happening where they shouldn't? What happens if you use == rather than ===?

Using == works, so indeed I'll check again whether I made some unintentional copies.

WRT the tests, if qr and svd are supported, can higher level factorizations that use them (orthnull and buddies) also be tested with BigFloat?

I'll take a look at whether we can support them here. The higher-level factorizations should probably be fine, but things like the Schur decomposition or gen_eig might be less trivial.

```julia
m, n = size(A)
k = min(m, n)
computeR = length(R) > 0
Q̃, R̃ = qr(A)
```
Member

This can definitely be improved. GenericLinearAlgebra.jl has both a qrUnblocked! and a qrBlocked!. They act similarly to LAPACK, i.e. after calling F = qrUnblocked!(A), the elements on and above the diagonal of A (with F.factors === A) encode R, whereas the elements below the diagonal of A, together with a new vector F.τ, encode the Householder reflectors that define Q. From this, you can reconstruct qr_compact!, qr_full!, and qr_null!.

However, all of this is quite cryptic if you are not familiar with these ancient LAPACK storage strategies.
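To make the storage convention concrete, a small sketch (illustrative only; it assumes the F.factors / F.τ layout described above and the LAPACK convention H_j = I - τ_j v_j v_j'):

```julia
using LinearAlgebra, GenericLinearAlgebra

A = big.(randn(4, 3))
F = GenericLinearAlgebra.qrUnblocked!(copy(A))  # F.factors aliases its input

R = triu(F.factors[1:3, 1:3])        # on and above the diagonal: R

# Below the diagonal, column j holds the Householder vector v_j (with an
# implicit unit leading entry). Applying Q = H_1 H_2 H_3 to the identity
# reconstructs the full Q:
Q = Matrix{BigFloat}(I, 4, 4)
for j in 3:-1:1
    v = [zeros(BigFloat, j - 1); one(BigFloat); F.factors[j+1:end, j]]
    Q .-= F.τ[j] .* (v * (v' * Q))   # apply H_j from the left
end
@assert Q[:, 1:3] * R ≈ A            # the compact Q is the first k columns
```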

@Jutho Jutho mentioned this pull request Oct 31, 2025
@sanderdemeyer
Contributor Author

I think I have now resolved all comments, apart from the comment from @Jutho about qrUnblocked! and qrBlocked!, and the comment from @lkdvos about the unnecessary allocations. I have opened an issue in both GenericLinearAlgebra.jl and GenericSchur.jl to support passing the output to these functions. Since we have an alternative (at least for QR) with the new implementation of #90 that works generically and without extra allocations, I propose that we merge this PR in its current form. I can then also bring TensorKit.exp here, which was the ultimate goal. If the issues in the external packages are resolved, we can open a new PR to handle it here in the extensions as well.

If there are any additional comments, please let me know.

@sanderdemeyer
Contributor Author

I'll take a look at these comments tomorrow. As for the allocations, I agree that the current implementation is suboptimal. My point was that this indeed requires us to intercept these allocations and change the tests (which is also a bit more annoying in the case of LQ, since we are using LQViaTransposedQR for this and have to intercept it internally), which is work that will just have to be reverted once the issues in the external packages are resolved.

@Jutho
Member

Jutho commented Nov 3, 2025

I had a look at GenericSchur.jl, which I was much less familiar with than GenericLinearAlgebra.jl. That seems to reimplement a whole lot of LAPACK functionality related to Schur and eigenvalue decompositions, and so it might be possible to write the wrappers for schur_..., eig_ and eigh_ in a manner that is much closer in spirit to the LAPACK algorithms, and in particular write directly to the given output arrays.

So I think it would definitely be good to split this PR into two parts corresponding to the two extensions.

GenericSchur.jl seems to provide both a QRIteration (which corresponds to the current Francis name) and a DivideAndConquer implementation for the hermitian eigenvalue problem, and the standard QRIteration algorithm for the nonhermitian one (which, for one reason or another, is not called that in LAPACK). So we might want to think about the best naming strategy for these algorithms as well. I would definitely not call them GLA_... if they come from GenericSchur.jl, as that package might be worthwhile using in its own right.

In case anyone is interested in reading more about the QR iteration method, Chapter 4 in https://people.inf.ethz.ch/arbenz/ewp/Lnotes/lsevp.pdf is really good.
My old implementation (which I still plan to include as well), is based on that.

  • Name change
  • some copying issues resolved
  • GS and GLA algorithms are now defined in the docs
  • `GLA_QR_Householder` is now `GLA_HouseholderQR` to be consistent with the LAPACK algorithms
@sanderdemeyer
Contributor Author

I've now taken care of the 4 TODOs and the other comments.

I've renamed the algorithms to GLA_HouseholderQR, GLA_QRIteration, and GS_QRIteration to be as general and compatible with the other algorithms as possible. The folder is now split into two, namely genericlinearalgebra/ and genericschur/, i.e. one for each extension.

After a conversation with Jutho, we concluded that the best way forward is to resolve the two remaining points later. E.g.,

I had a look at GenericSchur.jl, which I was much less familiar with than GenericLinearAlgebra.jl. That seems to reimplement a whole lot of LAPACK functionality related to Schur and eigenvalue decompositions, and so it might be possible to write the wrappers for schur_..., eig_ and eigh_ in a manner that is much closer in spirit to the LAPACK algorithms, and in particular write directly to the given output arrays.
So I think it would definitely be good to split this PR into two parts corresponding to the two extensions.

can be another PR where new algorithms are added, and

Considering the allocation of the output, do you know if we can pass the output to the implementations of these libraries? If not, that's definitely also okay, but in that case we should probably intercept allocating them in the first place, if they will be ignored. This can be achieved by overloading initialize_output(...) -> Nothing and altering the checks accordingly.

is now partly solved by using svd!, eigen!, eigvals!, and qr!. The problem will be solved completely once the other packages support passing the allocated output to their respective functions. I've opened an issue in both GenericSchur.jl (where they have already responded) and GenericLinearAlgebra.jl, so hopefully this will be resolved relatively soon.

If anyone disagrees with this approach, or has some additional suggestions, please let me know.

@lkdvos
Member

lkdvos commented Nov 4, 2025

I hope you don't mind, but I just went in and added the implementation of some of the comments that had already been discussed above and probably got overlooked, and I also simplified some of the code (based on suggestions that were already made above).

Some remarks for the implementations:

  • svd has a full argument, which seems to be supported by GLA, and with the rmul!(one!(Q), Qprime) implementation we don't need the Gram-Schmidt procedure (see the sketch after this list).
  • I ended up not implementing/calling check_input, which I would argue is fair since these checks should already be handled by the library that we are wrapping. It would be straightforward to add this back if we feel it is necessary.
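A sketch of that idea using LinearAlgebra's generic QR (with an explicit identity standing in for MatrixAlgebraKit's one! helper; sizes are illustrative):

```julia
using LinearAlgebra

A = big.(randn(4, 3))
F = qr(A)                          # generic Householder QR; Q stored implicitly
Qfull = Matrix{BigFloat}(I, 4, 4)
rmul!(Qfull, F.Q)                  # materialize the full 4×4 Q; no Gram-Schmidt needed
@assert Qfull' * Qfull ≈ I
@assert Qfull[:, 1:3] * F.R ≈ A
```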

@lkdvos
Member

lkdvos commented Nov 4, 2025

Would be good to get an additional review from @Jutho before merging, but this looks good to go for me.

@Jutho
Member

Jutho commented Nov 4, 2025

Aside from the small comments above, I am a bit torn about registering these methods as the default_... for StridedMatrix{BigFloat} and StridedMatrix{Complex{BigFloat}}.

On the one hand, they probably work for a larger set of types, such as a Float16 eltype, or non-strided matrices.

On the other hand, especially keeping in mind that we might have alternative Native_ algorithms for some of those, I am wondering if this is the best choice.

However, without these defaults in place, you would not be able to compute a matrix factorization with BigFloat entries without specifying the GLA_... algorithm explicitly, which is probably not what you want in generic code. If you explicitly load GenericLinearAlgebra, you probably want to use it.
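For illustration, registering such a default might look roughly like this (hypothetical sketch; the actual default_* hook name and signature in MatrixAlgebraKit may differ):

```julia
# Hypothetical sketch of registering a default algorithm; the hook name and
# signature are assumptions based on the default_... family mentioned above.
const BigFloatLike = Union{BigFloat, Complex{BigFloat}}

function MatrixAlgebraKit.default_qr_algorithm(::Type{<:StridedMatrix{<:BigFloatLike}}; kwargs...)
    return GLA_HouseholderQR(; kwargs...)
end
```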

I am, however, wondering what happens if you are using MatrixAlgebraKit together with some other packages, some of which have GenericLinearAlgebra.jl as a dependency. Then the package extension is also loaded without explicit consent, I assume?

@lkdvos
Member

lkdvos commented Nov 5, 2025

I addressed the remainder of the comments since I was already busy anyways.

Considering the package dependency being loaded, I think your assumption is true: if anyone has this as a dependency somewhere, the code will be available and the default algorithms will be registered.
While I would in time also prefer to have our own native implementation as the default, I don't see too much harm in registering these default algorithms either.
We can still swap them afterwards?

What is interesting though is that GenericLinearAlgebra is definitely doing quite strong type piracy on the regular LinearAlgebra methods, and is actually simply hijacking them.
One alternative strategy could therefore be to extend LinearAlgebra itself instead of GenericLinearAlgebra, and use that as a fallback.
(IIRC, GenericLinearAlgebra.svd is actually just LinearAlgebra.svd imported).
There could be an argument about that being fair as a fallback too...
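Sketched out, that alternative could look roughly like this (hypothetical; the struct and function names are illustrative, not MatrixAlgebraKit's API):

```julia
using LinearAlgebra

# Hypothetical sketch of the fallback idea: dispatch to plain LinearAlgebra.svd!,
# which GenericLinearAlgebra pirates for generic eltypes, instead of calling
# GenericLinearAlgebra functions by name.
struct LinearAlgebraFallback end

function svd_compact_fallback!(A::AbstractMatrix, ::LinearAlgebraFallback)
    F = LinearAlgebra.svd!(A)   # resolves to GLA's generic method when it is loaded
    return F.U, Diagonal(F.S), F.Vt
end
```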

In any case, I don't think changing this in upcoming releases really matters too much, and even if this counts as a breaking change, it shouldn't really trickle too far downstream anyways.

@sanderdemeyer
Contributor Author

I hope you don't mind, but I just went in and added the implementation of some of the comments that had already been discussed above and probably got overlooked, and I also simplified some of the code (based on suggestions that were already made above).

Not at all, thanks!

While I would in time also prefer to have our own native implementation as the default, I don't see too much harm in registering these default algorithms either.
We can still swap them afterwards?

I agree with Lukas here.

@Jutho
Member

Jutho commented Nov 5, 2025

What is interesting though is that GenericLinearAlgebra is definitely doing quite strong type piracy on the regular LinearAlgebra methods, and is actually simply hijacking them. One alternative strategy could therefore be to extend LinearAlgebra itself instead of GenericLinearAlgebra, and use that as a fallback. (IIRC, GenericLinearAlgebra.svd is actually just LinearAlgebra.svd imported). There could be an argument about that being fair as a fallback too...

They are aware of this, and there is a PR to change it (JuliaLinearAlgebra/GenericLinearAlgebra.jl#147), but clearly that is not what people want, since indeed everyone wants the same call to just work using the fastest algorithm irrespective of the element type.

@Jutho
Member

Jutho commented Nov 5, 2025

Ok, I think this is ready so I will merge.

@Jutho Jutho merged commit 5d36c98 into QuantumKitHub:main Nov 5, 2025
10 checks passed
@lkdvos lkdvos mentioned this pull request Nov 14, 2025
lkdvos referenced this pull request Nov 14, 2025
* Bump v0.6

* rename `gaugefix` -> `fixgauge`

* reduce unnecessary warnings

* fix `copy_input` signatures in Mooncake tests

* Add changelog to docs


Development

Successfully merging this pull request may close these issues.

[Feature request] Add support for BigFloats
